Results 1 - 16 of 16
1.
Phys Med Biol ; 2024 May 02.
Article in English | MEDLINE | ID: mdl-38697195

ABSTRACT

OBJECTIVE: Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few X-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach: We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular X-ray projections. Specifically, PMF-STINR uses spatial implicit neural representations to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results: PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1s) resolution and sub-millimeter accuracy. 
Significance: PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion problem from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
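The spatial INR at the heart of PMF-STINR is, in essence, a coordinate network: a multilayer perceptron that maps 3D voxel coordinates to attenuation values, typically after a Fourier-feature encoding so fine detail can be represented. The abstract gives no architectural details; the sketch below is a generic, minimal illustration (the layer sizes, number of encoding frequencies, and random weights are all assumptions, and no training loop is shown):

```python
import numpy as np

def fourier_features(coords, n_freq=4):
    """Encode 3D coordinates with sin/cos Fourier features so a small
    MLP can represent high-frequency image content."""
    freqs = 2.0 ** np.arange(n_freq) * np.pi          # (n_freq,)
    angles = coords[..., None] * freqs                # (..., 3, n_freq)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*coords.shape[:-1], -1)        # (..., 3 * 2 * n_freq)

def mlp_forward(x, weights):
    """Plain fully connected network: encoded coordinates -> one value."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)                    # ReLU on hidden layers
    return x

rng = np.random.default_rng(0)
d_in = 3 * 2 * 4                                      # matches n_freq=4
weights = [(rng.normal(size=(d_in, 64)) * 0.1, np.zeros(64)),
           (rng.normal(size=(64, 1)) * 0.1, np.zeros(1))]

coords = rng.uniform(-1, 1, size=(100, 3))            # sampled voxel positions
values = mlp_forward(fourier_features(coords), weights)
print(values.shape)                                   # one value per coordinate
```

In the actual method the weights would be optimized on the fly against the acquired X-ray projections rather than left random.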

2.
ArXiv ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38659638

ABSTRACT

Cone-beam computed tomography (CBCT) is widely used in image-guided radiotherapy. Reconstructing CBCTs from limited-angle acquisitions (LA-CBCT) is highly desired for improved imaging efficiency, dose reduction, and better mechanical clearance. LA-CBCT reconstruction, however, suffers from severe under-sampling artifacts, making it a highly ill-posed inverse problem. Diffusion models can generate data/images by reversing a data-noising process through learned data distributions; and can be incorporated as a denoiser/regularizer in LA-CBCT reconstruction. In this study, we developed a diffusion model-based framework, prior frequency-guided diffusion model (PFGDM), for robust and structure-preserving LA-CBCT reconstruction. PFGDM uses a conditioned diffusion model as a regularizer for LA-CBCT reconstruction, and the condition is based on high-frequency information extracted from patient-specific prior CT scans which provides a strong anatomical prior for LA-CBCT reconstruction. Specifically, we developed two variants of PFGDM (PFGDM-A and PFGDM-B) with different conditioning schemes. PFGDM-A applies the high-frequency CT information condition until a pre-optimized iteration step, and drops it afterwards to enable both similar and differing CT/CBCT anatomies to be reconstructed. PFGDM-B, on the other hand, continuously applies the prior CT information condition in every reconstruction step, while with a decaying mechanism, to gradually phase out the reconstruction guidance from the prior CT scans. The two variants of PFGDM were tested and compared with current available LA-CBCT reconstruction solutions, via metrics including PSNR and SSIM. PFGDM outperformed all traditional and diffusion model-based methods. PFGDM reconstructs high-quality LA-CBCTs under very-limited gantry angles, allowing faster and more flexible CBCT scans with dose reductions.
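PFGDM's conditioning signal is high-frequency information extracted from a patient-specific prior CT. The paper's exact filter design is not reproduced here; a generic radial high-pass in the 2D Fourier domain conveys the idea (the cutoff fraction and slice-wise 2D filtering are assumptions):

```python
import numpy as np

def high_frequency_condition(prior_ct, cutoff_frac=0.1):
    """Extract the high-frequency (edge/structure) component of a prior CT
    slice with a simple radial high-pass filter in the Fourier domain."""
    k = np.fft.fftshift(np.fft.fft2(prior_ct))
    ny, nx = prior_ct.shape
    yy, xx = np.mgrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    radius = np.sqrt((yy / ny) ** 2 + (xx / nx) ** 2)
    k_high = k * (radius > cutoff_frac)               # zero out low frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(k_high)))

rng = np.random.default_rng(1)
ct = rng.normal(size=(64, 64))                        # stand-in for a CT slice
edges = high_frequency_condition(ct)
# the DC (zero-frequency) component is removed, so the mean is ~0
print(abs(edges.mean()) < 1e-8)                       # → True
```

PFGDM-A would drop this condition after a pre-optimized iteration step, while PFGDM-B would keep it with a decaying weight; neither schedule is shown here.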

3.
Phys Med Biol ; 69(9)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38479004

ABSTRACT

Objective. 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly under-sampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly under-sampled data. Approach. STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model that deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis. The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as two MR datasets acquired clinically from human subjects. Its reconstruction accuracy was also compared with that of the model-based non-rigid motion estimation method (MR-MOTUS) and a deep learning-based method (TEMPEST). Main results. STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions.
Compared with MR-MOTUS and TEMPEST, STINR-MR consistently reconstructed images with better image quality and fewer artifacts and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR reconstructed the tumors to a mean ± SD center-of-mass error of 0.9 ± 0.4 mm, compared to 3.4 ± 1.0 mm for the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured. Significance. STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid generalizability issues typically encountered in deep learning-based methods.
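The motion model described above is a weighted combination of PCA basis DVFs, with the per-frame weights supplied by the temporal INR. A minimal sketch, using random stand-ins for the basis fields and a toy function in place of the learned temporal INR (the grid size, component count, and the toy weight function are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre-computed quantities (in STINR-MR these come from PCA on
# prior/onboard 4D-MRIs): a mean DVF and 3 principal-component DVFs, each
# flattened from a (nx, ny, nz, 3) deformation field.
n_voxels3 = 16 * 16 * 16 * 3
mean_dvf = rng.normal(size=n_voxels3)
basis_dvfs = rng.normal(size=(3, n_voxels3))          # PCA motion basis

def motion_field(t, temporal_weights):
    """Combine basis DVFs into the time-resolved motion field for frame t.
    `temporal_weights` stands in for the temporal INR, which maps a time
    point to one weighting factor per basis component."""
    w = temporal_weights(t)                           # (3,)
    return mean_dvf + w @ basis_dvfs                  # flattened DVF

# toy stand-in for the learned temporal INR: smooth periodic weights
toy_inr = lambda t: np.array([np.sin(t), 0.5 * np.cos(t), 0.1 * np.sin(2 * t)])

dvf_t = motion_field(0.3, toy_inr)
print(dvf_t.shape)
```

The low-dimensional weighting is what makes the spatiotemporal problem tractable: only a few scalars per frame, rather than a full DVF, must be solved.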


Subject(s)
Neoplasms , Respiration , Humans , Magnetic Resonance Imaging, Cine , Imaging, Three-Dimensional/methods , Motion , Phantoms, Imaging , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
4.
IEEE Trans Med Imaging ; 43(3): 980-993, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37851552

ABSTRACT

Recently, the diffusion model has emerged as a superior generative model that can produce high quality and realistic images. However, for medical image translation, the existing diffusion models are deficient in accurately retaining structural information since the structure details of source domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion, while the integrity of anatomical structures is extremely important in medical images. For instance, errors in image translation may distort, shift, or even remove structures and tumors, leading to incorrect diagnosis and inadequate treatments. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution testing data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model for structure-preserving image translation. Based on its design, FGDM allows zero-shot learning, as it can be trained solely on the data from the target domain, and used directly for source-to-target domain translation without any exposure to the source-domain data during training. We evaluated it on three cone-beam CT (CBCT)-to-CT translation tasks for different anatomical sites, and a cross-institutional MR imaging translation task. FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) in metrics of Fréchet Inception Distance (FID), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM), showing its significant advantages in zero-shot medical image translation.


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Cone-Beam Computed Tomography/methods , Phantoms, Imaging , Signal-To-Noise Ratio , Magnetic Resonance Imaging
5.
ArXiv ; 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38013886

ABSTRACT

Objective: Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few X-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach: We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular X-ray projections. Specifically, PMF-STINR uses spatial implicit neural representation to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results: PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1s) resolution and sub-millimeter accuracy. 
Significance: PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.

6.
Med Phys ; 50(11): 6649-6662, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37922461

ABSTRACT

BACKGROUND: Real-time liver imaging is challenged by the short imaging time (within hundreds of milliseconds) needed to meet the temporal constraint posed by rapid patient breathing, resulting in extreme under-sampling for the desired 3D imaging. Deep learning (DL)-based real-time imaging/motion estimation techniques are emerging as promising solutions, which can use a single X-ray projection to estimate 3D moving liver volumes by solved deformable motion. However, such techniques were mostly developed for a specific, fixed X-ray projection angle, making them impractical for verifying and guiding arc-based radiotherapy with continuous gantry rotation. PURPOSE: To enable deformable motion estimation and 3D liver imaging from individual X-ray projections acquired at arbitrary X-ray scan angles, and to further improve the accuracy of single X-ray-driven motion estimation. METHODS: We developed a DL-based method, X360, to estimate the deformable motion of the liver boundary using an X-ray projection acquired at an arbitrary gantry angle (angle-agnostic). X360 incorporated patient-specific prior information from planning 4D-CTs to address the under-sampling issue, and adopted a deformation-driven approach to deform a prior liver surface mesh to new meshes that reflect real-time motion. The liver mesh motion is solved via motion-related image features encoded in the arbitrary-angle X-ray projection, and through a sequential combination of rigid and deformable registration modules. To achieve angle agnosticism, a geometry-informed X-ray feature pooling layer was developed to allow X360 to extract angle-dependent image features for motion estimation. As a liver boundary motion solver, X360 was also combined with previously developed, DL-based optical surface imaging and biomechanical modeling techniques for intra-liver motion estimation and tumor localization. RESULTS: With geometry-aware feature pooling, X360 can solve the liver boundary motion from an arbitrary-angle X-ray projection.
Evaluated on a set of 10 liver patient cases, the mean (± s.d.) 95-percentile Hausdorff distance between the solved liver boundary and the "ground-truth" decreased from 10.9 (±4.5) mm (before motion estimation) to 5.5 (±1.9) mm (X360). When X360 was further integrated with surface imaging and biomechanical modeling for liver tumor localization, the mean (± s.d.) center-of-mass localization error of the liver tumors decreased from 9.4 (± 5.1) mm to 2.2 (± 1.7) mm. CONCLUSION: X360 can achieve fast and robust liver boundary motion estimation from arbitrary-angle X-ray projections for real-time imaging guidance. Serving as a surface motion solver, X360 can be integrated into a combined framework to achieve accurate, real-time, and marker-less liver tumor localization.


Subject(s)
Deep Learning , Liver Neoplasms , Humans , X-Rays , Phantoms, Imaging , Motion , Liver Neoplasms/diagnostic imaging
7.
ArXiv ; 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37645038

ABSTRACT

Objective: 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study the anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly undersampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly undersampled data. Approach: STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model which deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis (PCA). The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as MR data acquired clinically from a healthy human subject. Its reconstruction accuracy was also compared with that of the model-based non-rigid motion estimation method (MR-MOTUS). Main results: STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions. 
Compared with MR-MOTUS, STINR-MR consistently reconstructed images with better image quality and fewer artifacts and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR reconstructed the tumors to a mean±S.D. center-of-mass error of 1.0±0.4 mm, compared to 3.4±1.0 mm of the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured. Significance: STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid generalizability issues typically encountered in deep learning-based methods.

8.
Phys Med Biol ; 68(6)2023 03 09.
Article in English | MEDLINE | ID: mdl-36731143

ABSTRACT

Objective. Real-time imaging, a building block of real-time adaptive radiotherapy, provides instantaneous knowledge of anatomical motion to drive delivery adaptation to improve patient safety and treatment efficacy. The temporal constraint of real-time imaging (<500 milliseconds) significantly limits the imaging signals that can be acquired, rendering volumetric imaging and 3D tumor localization extremely challenging. Real-time liver imaging is particularly difficult, compounded by the low soft tissue contrast within the liver. We proposed a deep learning (DL)-based framework (Surf-X-Bio) to track 3D liver tumor motion in real time from a combined optical surface image and a single on-board x-ray projection. Approach. Surf-X-Bio performs mesh-based deformable registration to track/localize liver tumors volumetrically via three steps. First, a DL model was built to estimate liver boundary motion from an optical surface image, using learnt motion correlations between the respiratory-induced external body surface and liver boundary. Second, the residual liver boundary motion estimation error was further corrected by a graph neural network-based DL model, using information extracted from a single x-ray projection. Finally, a biomechanical modeling-driven DL model was applied to solve the intra-liver motion for tumor localization, using the liver boundary motion derived via prior steps. Main results. Surf-X-Bio demonstrated higher accuracy and better robustness in tumor localization, as compared to surface-image-only and x-ray-only models. By Surf-X-Bio, the mean (±s.d.) 95-percentile Hausdorff distance of the liver boundary from the 'ground-truth' decreased from 9.8 (±4.5) mm (before motion estimation) to 2.4 (±1.6) mm. The mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 8.3 (±4.8) to 1.9 (±1.6) mm. Significance. Surf-X-Bio can accurately track liver tumors from combined surface imaging and x-ray imaging.
The fast computational speed (<250 milliseconds per inference) allows it to be applied clinically for real-time motion management and adaptive radiotherapy.


Subject(s)
Liver Neoplasms , Humans , X-Rays , Radiography , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/radiotherapy , Neural Networks, Computer , Motion , Imaging, Three-Dimensional/methods
9.
Phys Med Biol ; 68(4)2023 02 06.
Article in English | MEDLINE | ID: mdl-36638543

ABSTRACT

Objective. Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolutions to enable applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection samples available for each CBCT reconstruction (one projection for one CBCT volume). Approach. We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR mapped the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimized the neuron weights of the MLPs via acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we also introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal mapping to address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended-cardiac-torso (XCAT) phantom and a patient 4D-CBCT dataset to simulate different lung motion scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity. The XCAT scenarios also contain inter-scan anatomical variations including tumor shrinkage and tumor position change. Main results. STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung target to an average center-of-mass error of 1-2 mm, with corresponding relative errors of reconstructed dynamic CBCTs around 10%. Significance.
STINR offers a general framework allowing accurate dynamic CBCT reconstruction for image-guided radiotherapy. It is a one-shot learning method that does not rely on pre-training and is not susceptible to generalizability issues. It also allows natural super-resolution. It can be readily applied to other imaging modalities as well.
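The iterative optimization against acquired projections can be reduced to its data-consistency core. The toy below fits an image to line-integral "projections" by gradient descent; a parallel-ray column sum stands in for the real cone-beam forward projector, and a plain array replaces the INR (both deliberate simplifications, not the paper's method):

```python
import numpy as np

# Toy forward model: a "projection" at a single angle sums the image along
# rows. Real dynamic CBCT uses one cone-beam projection per time point.
rng = np.random.default_rng(4)
truth = rng.random((16, 16))
proj = truth.sum(axis=0)                              # measured projection data

img = np.zeros((16, 16))
for _ in range(200):
    residual = img.sum(axis=0) - proj                 # data mismatch, per ray
    img -= 0.05 * np.tile(residual, (16, 1))          # gradient step: A^T r
print(np.allclose(img.sum(axis=0), proj, atol=1e-6))  # → True
```

With only one projection per frame, many images match the data; this is the ill-conditioning that the PCA motion prior and the INR's implicit regularization are meant to resolve.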


Subject(s)
Lung Neoplasms , Lung , Humans , Motion , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Phantoms, Imaging , Cone-Beam Computed Tomography/methods , Algorithms , Image Processing, Computer-Assisted/methods , Four-Dimensional Computed Tomography/methods
10.
Phys Med Biol ; 67(13)2022 06 29.
Article in English | MEDLINE | ID: mdl-35667374

ABSTRACT

Purpose. Real-time three-dimensional (3D) magnetic resonance (MR) imaging is challenging because of slow MR signal acquisition, leading to highly under-sampled k-space data. Here, we proposed a deep learning-based, k-space-driven deformable registration network (KS-RegNet) for real-time 3D MR imaging. By incorporating prior information, KS-RegNet performs a deformable image registration between a fully-sampled prior image and on-board images acquired from highly under-sampled k-space data, to generate high-quality on-board images for real-time motion tracking. Methods. KS-RegNet is an end-to-end, unsupervised network consisting of an input data generation block, a subsequent U-Net core block, and following operations to compute data fidelity and regularization losses. The input data involved a fully-sampled, complex-valued prior image, and the k-space data of an on-board, real-time MR image (MRI). From the k-space data, under-sampled real-time MRI was reconstructed by the data generation block as input to the U-Net core. In addition, to train the U-Net core to learn the under-sampling artifacts, the k-space data of the prior image was intentionally under-sampled using the same readout trajectory as the real-time MRI, and reconstructed to serve as an additional input. The U-Net core predicted a deformation vector field that deforms the prior MRI to the on-board real-time MRI. To avoid the adverse effects of quantifying image similarity on artifact-ridden images, the data fidelity loss of the deformation was evaluated directly in k-space. Results. Compared with Elastix and other deep learning network architectures, KS-RegNet demonstrated better and more stable performance. The average (±s.d.) DICE coefficients of KS-RegNet on a cardiac dataset for the 5-, 9-, and 13-spoke k-space acquisitions were 0.884 ± 0.025, 0.889 ± 0.024, and 0.894 ± 0.022, respectively; and the corresponding average (±s.d.)
center-of-mass errors (COMEs) were 1.21 ± 1.09, 1.29 ± 1.22, and 1.01 ± 0.86 mm, respectively. KS-RegNet also provided the best performance on an abdominal dataset. Conclusion. KS-RegNet allows real-time MRI generation with sub-second latency. It enables potential real-time MR-guided soft tissue tracking, tumor localization, and radiotherapy plan adaptation.
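Evaluating data fidelity directly in k-space means Fourier-transforming the deformed image and comparing it with the measured samples only where the (under-sampled) readout trajectory actually acquired data. A minimal 2D sketch (a random sampling mask instead of radial spokes, and a plain L2 residual, are assumptions):

```python
import numpy as np

def kspace_fidelity_loss(deformed_image, measured_kspace, sampling_mask):
    """Data-fidelity term in k-space: Fourier-transform the deformed prior
    image and penalize mismatch only at the sampled k-space locations."""
    predicted_k = np.fft.fft2(deformed_image)
    residual = (predicted_k - measured_kspace) * sampling_mask
    return np.sum(np.abs(residual) ** 2) / sampling_mask.sum()

rng = np.random.default_rng(3)
truth = rng.normal(size=(32, 32))
mask = rng.random((32, 32)) < 0.2                     # ~20% of k-space sampled
measured = np.fft.fft2(truth) * mask

# a perfectly deformed image reproduces the measured samples exactly
print(kspace_fidelity_loss(truth, measured, mask))    # → 0.0
```

Comparing in k-space sidesteps the artifact-ridden zero-filled reconstructions that would corrupt an image-domain similarity metric.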


Subject(s)
Artifacts , Magnetic Resonance Imaging , Abdomen , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Motion
11.
Phys Med Biol ; 67(11)2022 05 24.
Article in English | MEDLINE | ID: mdl-35483350

ABSTRACT

Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge to localize the tumor from the scarce projections. For liver radiotherapy, such a challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissues. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking, and real-time plan adaptation. The method can be adapted to other anatomical sites as well.


Subject(s)
Liver Neoplasms , Radiotherapy, Image-Guided , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/radiotherapy , Neural Networks, Computer , Radiography , Radiotherapy, Image-Guided/methods , X-Rays
12.
Med Phys ; 48(12): 7790-7805, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34632589

ABSTRACT

PURPOSE: Recently, two-dimensional-to-three-dimensional (2D-3D) deformable registration has been applied to deform liver tumor contours from prior reference images onto estimated cone-beam computed tomography (CBCT) target images to automate on-board tumor localization. Biomechanical modeling has also been introduced to fine-tune the intra-liver deformation vector fields (DVFs) solved by 2D-3D deformable registration, especially at low-contrast regions, using tissue elasticity information and liver boundary DVFs. However, the caudal liver boundary shows low contrast from surrounding tissues in the cone-beam projections, which degrades the accuracy of the intensity-based 2D-3D deformable registration there and results in less accurate boundary conditions for biomechanical modeling. We developed a deep-learning (DL)-based method to optimize the liver boundary DVFs after 2D-3D deformable registration to further improve the accuracy of subsequent biomechanical modeling and liver tumor localization. METHODS: The DL-based network was built on the U-Net architecture. The network was trained in a supervised fashion to learn motion correlation between cranial and caudal liver boundaries to optimize the liver boundary DVFs. Inputs of the network had three channels, and each channel featured the 3D DVFs estimated by the 2D-3D deformable registration along one Cartesian direction (x, y, z). To incorporate patient-specific liver boundary information into the DVFs, the DVFs were masked by a liver boundary ring structure generated from the liver contour of the prior reference image. The network outputs were the optimized DVFs along the liver boundary with higher accuracy. From these optimized DVFs, boundary conditions were extracted for biomechanical modeling to further optimize the solution of intra-liver tumor motion. We evaluated the method using 34 liver cancer patient cases, with 24 for training and 10 for testing.
We evaluated and compared the performance of three methods: 2D-3D deformable registration, 2D-3D-Bio (2D-3D deformable registration with biomechanical modeling), and DL-Bio (DL model prediction with biomechanical modeling). The tumor localization errors were quantified through calculating the center-of-mass-errors (COMEs), DICE coefficients, and Hausdorff distance between deformed liver tumor contours and manually segmented "gold-standard" contours. RESULTS: The predicted DVFs by the DL model showed improved accuracy at the liver boundary, which translated into more accurate liver tumor localizations through biomechanical modeling. On a total of 90 evaluated images and tumor contours, the average (± sd) liver tumor COMEs of the 2D-3D, 2D-3D-Bio, and DL-Bio techniques were 4.7 ± 1.9 mm, 2.9 ± 1.0 mm, and 1.7 ± 0.4 mm. The corresponding average (± sd) DICE coefficients were 0.60 ± 0.12, 0.71 ± 0.07, and 0.78 ± 0.03; and the average (± sd) Hausdorff distances were 7.0 ± 2.6 mm, 5.4 ± 1.5 mm, and 4.5 ± 1.3 mm, respectively. CONCLUSION: DL-Bio solves a general correlation model to improve the accuracy of the DVFs at the liver boundary. With improved boundary conditions, the accuracy of biomechanical modeling can be further increased for accurate intra-liver low-contrast tumor localization.
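The liver boundary ring structure used to mask the DVFs can be built from the prior liver contour as a dilation minus an erosion of the mask. A 2D pure-NumPy sketch (the ring width and cross-shaped structuring element are assumptions; the paper works on 3D volumes):

```python
import numpy as np

def binary_dilate(mask, it=1):
    """Binary dilation with a 3x3 cross structuring element, pure NumPy."""
    out = mask.copy()
    for _ in range(it):
        m = out
        out = m.copy()
        out[1:, :] |= m[:-1, :]
        out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]
        out[:, :-1] |= m[:, 1:]
    return out

def boundary_ring(organ_mask, width=2):
    """Ring straddling the organ contour: dilation minus erosion.
    Erosion is computed as dilation of the background (complement trick)."""
    dil = binary_dilate(organ_mask, width)
    ero = ~binary_dilate(~organ_mask, width)
    return dil & ~ero

liver = np.zeros((20, 20), dtype=bool)
liver[5:15, 5:15] = True                              # toy liver mask
ring = boundary_ring(liver, width=2)

# deep interior stays outside the ring; the contour itself is inside it
print(ring[10, 10], ring[5, 5])                       # → False True
```

Masking the input DVFs with such a ring focuses the network on the boundary region it is meant to correct, while discarding the unreliable interior estimates.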


Subject(s)
Deep Learning , Liver Neoplasms , Algorithms , Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging
13.
Ultramicroscopy ; 205: 70-74, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31247455

ABSTRACT

We show images produced by an electron beam deflector, a quadrupole lens, and an einzel lens fabricated from conducting and non-conducting plastic using a 3D printer. Despite the difficulties associated with the use of plastics in vacuum, such as outgassing, poor conductivity, and print defects, the devices were used successfully in vacuum to steer, stretch, and focus electron beams to millimeter diameters. Simulations that account for possible surface defects indicate that much smaller focal spot sizes may be achievable with such 3D-printed plastic electron lenses. This work was motivated by our need to place electron optical components in difficult-to-access geometries. Our proof-of-principle demonstration opens the door to 3D-printed electron microscopes, whose reduced cost would make such microscopes more widely available. Potentially, this may have a significant impact on electron beam science and technology in general and electron microscopy in particular.

14.
Phys Rev Lett ; 105(26): 263201, 2010 Dec 31.
Article in English | MEDLINE | ID: mdl-21231655

ABSTRACT

The detection of spatial and temporal electronic motion by scattering of subfemtosecond pulses of 10 keV electrons from coherent superpositions of electronic states of both H and T2(+) is investigated. For the H atom, we predict changes in the diffraction images that reflect the time-dependent effective radius of the electronic charge density. For an aligned T2(+) molecule, the diffraction image changes reflect the time-dependent localization or delocalization of the electronic charge density.

15.
J Chem Phys ; 130(1): 014301, 2009 Jan 07.
Article in English | MEDLINE | ID: mdl-19140609

ABSTRACT

A second example of a barrierless reaction between two closed-shell molecules is reported. The reaction F(2)+CH(3)SSCH(3) has been investigated with crossed molecular beam experiments and ab initio calculations. Compared with previous results of the F(2)+CH(3)SCH(3) reaction [J. Chem. Phys. 127, 101101 (2007); J. Chem. Phys. 128, 104317 (2008)], a new product channel leading to CH(3)SF+CH(3)SF is observed to be predominant in the title reaction, whereas the anticipated HF+C(2)H(5)S(2)F channel is not found. In addition, the F+C(2)H(6)S(2)F product channel, the analog to the F+C(2)H(6)SF channel in the F(2)+CH(3)SCH(3) reaction, opens up at collision energies higher than 4.3 kcal/mol. Angular and translational energy distributions of the products are reported and collision energy dependences of the reaction cross section and product branching ratio are shown. The reaction barrier is found to be negligible (<<1 kcal/mol). Multireference ab initio calculations suggest a reaction mechanism involving a short-lived intermediate which can be formed without activation energy.

16.
J Chem Phys ; 128(18): 184302, 2008 May 14.
Article in English | MEDLINE | ID: mdl-18532807

ABSTRACT

The reaction of F(2)+C(2)H(4) has been investigated with crossed molecular beam experiments and high level ab initio calculations. For a wide range of collision energies up to 11 kcal/mol, only one reaction channel could be observed in the gas phase. The primary products of this channel were identified as F+CH(2)CH(2)F. The experimental reaction threshold of collision energy was determined to be 5.5+/-0.5 kcal/mol. The product angular distribution was found to be strongly backward, indicating that the reaction time scale is substantially shorter than rotation. The calculated transition state structure suggests an early barrier; such dynamics is consistent with the small product kinetic energy release measured in the experiment. All experimental results consistently support a rebound reaction mechanism, which is suggested by the calculation of the intrinsic reaction coordinate. This work provides a clear and unambiguous description of the reaction dynamics, which may help to answer the question of why the same reaction produces totally different products in the condensed phase.
